We propose a new method for computing error rates in automatic speech recognition (ASR). The new metric targets languages containing half-characters, where the same character can be written in different forms. We implement our methodology for Hindi, one of the main languages in the Indic context, and we believe the approach extends to other similar languages with large character sets. We call our metrics the Alternate Word Error Rate (AWER) and the Alternate Character Error Rate (ACER). We train our ASR model using wav2vec 2.0 \cite{baevski2020wav2vec}, and additionally use a language model to improve performance. Our results show a significant improvement in analyzing error rates at the word and character level, with the interpretability of the ASR system improving by up to $3\%$ in AWER and $7\%$ in ACER for Hindi. Our experiments suggest that in languages with complex pronunciation there are multiple ways of writing a word without changing its meaning; in such cases AWER and ACER are more useful metrics than WER and CER. We also open-source a new benchmark dataset of 21 hours for Hindi along with scripts for the new metrics.
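At a high level, both metrics can be read as ordinary edit-distance error rates computed after alternate written forms are mapped to one canonical spelling. The sketch below illustrates only that reading; the ALTERNATES table and the normalization rule are hypothetical, not the paper's released scripts.

```python
# Minimal sketch of AWER/ACER: normalize alternate written forms to a
# canonical spelling, then compute the usual Levenshtein-based error rate.
# The ALTERNATES table is a hypothetical illustration.
ALTERNATES = {
    "\u0915\u093c": "\u0958",  # e.g. Devanagari KA + nukta -> precomposed QA
}

def canonical(text: str) -> str:
    for variant, canon in ALTERNATES.items():
        text = text.replace(variant, canon)
    return text

def edit_distance(ref, hyp) -> int:
    # standard dynamic-programming Levenshtein distance over token sequences
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[-1][-1]

def awer(reference: str, hypothesis: str) -> float:
    ref, hyp = canonical(reference).split(), canonical(hypothesis).split()
    return edit_distance(ref, hyp) / max(len(ref), 1)

def acer(reference: str, hypothesis: str) -> float:
    ref, hyp = list(canonical(reference)), list(canonical(hypothesis))
    return edit_distance(ref, hyp) / max(len(ref), 1)
```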
We study the effect of applying a language model (LM) on the output of automatic speech recognition (ASR) systems for Indic languages. We fine-tune wav2vec $2.0$ models for $18$ Indic languages and adjust the results with language models trained on text derived from a variety of sources. Our findings show that the average character error rate (CER) decreases by $28\%$ and the average word error rate (WER) decreases by $36\%$ after decoding with the LM. We show that a large LM may not provide a substantial improvement compared to a diverse one. We also demonstrate that high-quality transcriptions can be obtained on domain-specific data without retraining the ASR model, and we show results for the biomedical domain.
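For concreteness, LM decoding of CTC output is commonly done with a beam-search decoder such as pyctcdecode over a KenLM n-gram model; the sketch below assumes that tooling (the abstract does not name it), with placeholder paths, label set, and weights.

```python
# Hedged sketch of n-gram LM decoding over CTC output with pyctcdecode.
import numpy as np
from pyctcdecode import build_ctcdecoder

labels = list("abcdefghijklmnopqrstuvwxyz '")  # stand-in for the model's output vocabulary, in order

decoder = build_ctcdecoder(
    labels,
    kenlm_model_path="lm.arpa",  # placeholder: n-gram LM trained on diverse text
    alpha=0.5,  # LM weight; tuned per language in practice
    beta=1.0,   # word-insertion bonus; tuned per language in practice
)

log_probs = np.load("log_probs.npy")  # (time, vocab) frame log-probabilities from the acoustic model
transcript = decoder.decode(log_probs)
```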
Training multilingual automatic speech recognition (ASR) systems is challenging because acoustic and lexical information is typically language-specific. Training multilingual systems for Indic languages is even harder due to the lack of open-source datasets and results across different approaches. We compare the performance of an end-to-end multilingual speech recognition system against monolingual models conditioned on language identification (LID). Decoded information from the multilingual model is used for language identification and is then combined with monolingual models to improve WER by 50% across languages. We also propose a similar technique to solve the code-switching problem and achieve WERs of 21.77 and 28.27 on Hindi-English and Bengali-English respectively. Our work discusses how transformer-based ASR, especially wav2vec 2.0, can be applied to develop multilingual ASR and code-switched ASR for Indic languages.
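A sketch of the two-pass idea under stated assumptions: the multilingual model's first-pass hypothesis identifies the language, and the matching monolingual model produces the final transcript. The script-voting LID below is a simple illustrative heuristic, not the paper's method.

```python
# Two-pass decoding sketch: multilingual first pass -> LID -> monolingual second pass.
import unicodedata

MONOLINGUAL_MODELS = {}  # script/language -> decode(audio) callable, loaded elsewhere

def identify_language(text: str) -> str:
    # crude LID: vote by the Unicode script name of each letter
    votes = {}
    for ch in text:
        if ch.isalpha():
            script = unicodedata.name(ch, "UNKNOWN").split()[0]  # e.g. "DEVANAGARI"
            votes[script] = votes.get(script, 0) + 1
    return max(votes, key=votes.get) if votes else "UNKNOWN"

def two_pass_decode(audio, multilingual_decode):
    first_pass = multilingual_decode(audio)
    lang = identify_language(first_pass)
    monolingual_decode = MONOLINGUAL_MODELS.get(lang)
    # fall back to the multilingual hypothesis if no monolingual model exists
    return monolingual_decode(audio) if monolingual_decode else first_pass
```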
We present Vakyansh, an end-to-end toolkit for speech recognition in Indic languages. India is home to almost 121 languages and around 1.25 billion speakers, yet most of these languages are low-resource in terms of data and pretrained models. Through Vakyansh, we introduce automatic data pipelines for data creation, model training, model evaluation, and deployment. We create 14,000 hours of speech data in 23 Indic languages and train wav2vec 2.0 based pretrained models. These pretrained models are then fine-tuned to create state-of-the-art speech recognition models for 18 Indic languages, followed by language models and punctuation restoration models. We open-source all these resources with the mission that they will inspire the speech community to develop speech-first applications in Indic languages using our ASR models.
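As a hedged illustration of how such a fine-tuned checkpoint is typically consumed, the snippet below runs greedy CTC inference through Hugging Face transformers; the checkpoint id and audio path are placeholders, not names released by the toolkit.

```python
# Greedy CTC inference with a fine-tuned wav2vec 2.0 checkpoint (sketch).
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

CHECKPOINT = "org/wav2vec2-hindi"  # placeholder checkpoint id

processor = Wav2Vec2Processor.from_pretrained(CHECKPOINT)
model = Wav2Vec2ForCTC.from_pretrained(CHECKPOINT).eval()

waveform, sr = torchaudio.load("utterance.wav")  # placeholder audio file
waveform = torchaudio.functional.resample(waveform, sr, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
transcript = processor.batch_decode(torch.argmax(logits, dim=-1))[0]
```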
Toxic content is one of the most critical issues for social media platforms today. India alone had 518 million social media users in 2020. In order to provide a good experience to content creators and their audience, it is crucial to flag toxic comments and the users who post them. But the big challenge is identifying toxicity in low-resource Indic languages because of the presence of multiple representations of the same text. Moreover, posts/comments on social media do not adhere to a particular format, grammar, or sentence structure; this makes the task of abuse detection more challenging for multilingual social media platforms. This paper describes the system proposed by team 'Moj Masti' using the data provided by ShareChat/Moj for the \emph{IIIT-D Multilingual Abusive Comment Identification} challenge. We focus on how we leverage multilingual transformer-based pretrained and fine-tuned models to approach the code-mixed/code-switched classification task. Our best performing system was an ensemble of XLM-RoBERTa and MuRIL, which achieved a mean F-1 score of 0.9 on the test data/leaderboard. We also observed an increase in performance by adding transliterated data. Furthermore, using weak metadata, ensembling, and some post-processing techniques boosted the performance of our system, placing us 1st on the leaderboard.
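A minimal sketch of such an ensemble, using the public base checkpoints as stand-ins for the team's fine-tuned models; the probability-averaging rule is an assumption, since the abstract does not specify how the ensemble combines scores.

```python
# Probability-averaging ensemble of two multilingual transformer classifiers (sketch).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NAMES = ["xlm-roberta-base", "google/muril-base-cased"]  # stand-ins for fine-tuned checkpoints
models = [AutoModelForSequenceClassification.from_pretrained(n, num_labels=2).eval() for n in NAMES]
tokenizers = [AutoTokenizer.from_pretrained(n) for n in NAMES]

def ensemble_toxicity(text: str) -> float:
    probs = []
    with torch.no_grad():
        for model, tok in zip(models, tokenizers):
            batch = tok(text, return_tensors="pt", truncation=True)
            probs.append(torch.softmax(model(**batch).logits, dim=-1))
    # average class probabilities across members and return P(toxic)
    return torch.stack(probs).mean(dim=0)[0, 1].item()
```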
We present CLSRIL-23, a self-supervised learning based audio pretrained model which learns cross-lingual speech representations from raw audio across 23 Indic languages. It is built on top of wav2vec 2.0, which is solved by training a contrastive task over masked latent speech representations and jointly learning a quantization of latents shared across all languages. We compare language-wise loss during pretraining to study the effects of monolingual versus multilingual pretraining. Performance on downstream fine-tuning tasks is also compared, and our experiments show that multilingual pretraining outperforms monolingual training, both in terms of learning speech representations that encode phonetic similarity across languages and in terms of downstream performance. A decrease of 5% in WER and 9.5% in CER is observed when a multilingual pretrained model is used for fine-tuning in Hindi. All code and models are open-sourced. CLSRIL-23 is a model trained on $23$ languages and almost 10,000 hours of audio data, built to facilitate research in speech recognition for Indic languages. We hope that new states of the art will be created using these self-supervised approaches, especially for low-resource Indic languages.
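The contrastive task the abstract refers to can be sketched as follows: the context vector at each masked time step must identify its true quantized latent among distractors. Shapes and the temperature are illustrative, not CLSRIL-23's exact configuration.

```python
# Simplified wav2vec 2.0 style contrastive objective (sketch).
import torch
import torch.nn.functional as F

def contrastive_loss(context, quantized, negatives, temperature=0.1):
    """context:   (batch, dim)        transformer outputs at masked steps
    quantized: (batch, dim)        true quantized latents for those steps
    negatives: (batch, n_neg, dim) distractor latents sampled from other steps"""
    candidates = torch.cat([quantized.unsqueeze(1), negatives], dim=1)  # (B, 1+n_neg, D)
    sims = F.cosine_similarity(context.unsqueeze(1), candidates, dim=-1) / temperature
    # the true latent sits at index 0 of the candidate set
    targets = torch.zeros(context.size(0), dtype=torch.long)
    return F.cross_entropy(sims, targets)
```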
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
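As a hedged illustration of the forecasting task's interface (not the AV2 devkit API), the sketch below applies a constant-velocity baseline to a scored actor's track history; all field names are hypothetical.

```python
# Constant-velocity baseline over a track history (illustrative, hypothetical schema).
from dataclasses import dataclass

@dataclass
class TrackState:
    x: float   # position (m)
    y: float
    vx: float  # velocity (m/s)
    vy: float

def constant_velocity_forecast(history: list, horizon_steps: int, dt: float = 0.1):
    # extrapolate from the most recent observed state
    last = history[-1]
    return [
        (last.x + last.vx * dt * k, last.y + last.vy * dt * k)
        for k in range(1, horizon_steps + 1)
    ]
```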
Object movement identification is one of the most researched problems in the field of computer vision. In this task, we try to classify a pixel as foreground or background. Even though numerous traditional machine learning and deep learning methods already exist for this problem, the two major issues with most of them are the need for large amounts of ground truth data and their inferior performance on unseen videos. Since every pixel of every frame has to be labeled, acquiring large amounts of data for these techniques gets rather expensive. Recently, Zhao et al. [1] proposed a one-of-a-kind Arithmetic Distribution Neural Network (ADNN) for universal background subtraction, which utilizes probability information from the histogram of temporal pixels and achieves promising results. Building on this work, we developed an intelligent video surveillance system that uses the ADNN architecture for motion detection, trims the video to only the parts containing motion, and performs anomaly detection on the trimmed video.
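The trimming stage can be sketched as follows, with OpenCV's MOG2 background subtractor standing in for the ADNN of Zhao et al. [1]; the foreground-ratio threshold is an illustrative choice.

```python
# Sketch of the motion-based trimming stage: find frame ranges with motion.
import cv2

def motion_segments(video_path: str, fg_ratio_threshold: float = 0.01):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2()  # stand-in for ADNN
    segments, start, frame_idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        moving = (mask > 0).mean() > fg_ratio_threshold  # fraction of foreground pixels
        if moving and start is None:
            start = frame_idx
        elif not moving and start is not None:
            segments.append((start, frame_idx))
            start = None
        frame_idx += 1
    if start is not None:
        segments.append((start, frame_idx))
    cap.release()
    return segments  # frame ranges to keep for anomaly detection
```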
The machine translation mechanism translates texts automatically between different natural languages, and Neural Machine Translation (NMT) has gained attention for its rational context analysis and fluent translation accuracy. However, processing low-resource languages that lack relevant training attributes like supervised data is a current challenge for Natural Language Processing (NLP). We incorporated a technique known as Active Learning with the NMT toolkit Joey NMT to reach sufficient accuracy and robust predictions for low-resource language translation. With active learning, a semi-supervised machine learning strategy, the training algorithm determines which unlabeled data would be the most beneficial for obtaining labels using selected query techniques. We implemented two model-driven acquisition functions for selecting the samples to be validated. This work uses transformer-based NMT systems: a baseline model (BM), a fully trained model (FTM), an active learning least-confidence based model (ALLCM), and an active learning margin-sampling based model (ALMSM), when translating English to Hindi. The Bilingual Evaluation Understudy (BLEU) metric has been used to evaluate the systems' results. The BLEU scores of the BM, FTM, ALLCM, and ALMSM systems are 16.26, 22.56, 24.54, and 24.20, respectively. The findings in this paper demonstrate that active learning techniques help the model converge early and improve the overall quality of the translation system.
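The two acquisition functions reduce to simple selection rules over the model's hypothesis probabilities; the sketch below shows only those rules and leaves abstract how the probabilities are obtained from Joey NMT.

```python
# Least-confidence and margin-sampling acquisition functions (sketch).
import numpy as np

def least_confidence(top1_probs: np.ndarray, k: int) -> np.ndarray:
    # pick the k unlabeled sentences whose best translation is least probable
    return np.argsort(top1_probs)[:k]

def margin_sampling(top1_probs: np.ndarray, top2_probs: np.ndarray, k: int) -> np.ndarray:
    # pick the k sentences with the smallest gap between the two best hypotheses
    return np.argsort(top1_probs - top2_probs)[:k]
```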
We study the problem of planning under model uncertainty in an online meta-reinforcement learning (RL) setting where an agent is presented with a sequence of related tasks with limited interactions per task. The agent can use its experience in each task and across tasks to estimate both the transition model and the distribution over tasks. We propose an algorithm to meta-learn the underlying structure across tasks, utilize it to plan in each task, and upper-bound the regret of the planning loss. Our bound suggests that the average regret over tasks decreases as the number of tasks increases and as the tasks become more similar. In the classical single-task setting, it is known that the planning horizon should depend on the estimated model's accuracy, that is, on the number of samples within the task. We generalize this finding to meta-RL and study this dependence of planning horizons on the number of tasks. Based on our theoretical findings, we derive heuristics for selecting slowly increasing discount factors, and we validate their significance empirically.
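Purely as an illustration of the "slowly increasing discount factor" heuristic (the square-root rate below is a hypothetical choice, not the paper's derived rule), one might grow the effective horizon $1/(1-\gamma)$ with the number of within-task samples:

```python
# Hypothetical schedule: discount factor rises toward gamma_max as data accrues.
import math

def discount_factor(n_samples: int, gamma_max: float = 0.99) -> float:
    # effective horizon 1/(1 - gamma) grows like sqrt(n_samples)
    gamma = 1.0 - 1.0 / math.sqrt(n_samples + 1)
    return min(gamma, gamma_max)
```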